Recent work in large language modeling (LLMs) has used fine-tuning to align outputs with the preferences of a prototypical user. This work assumes that human preferences are static and homogeneous across individuals, so that aligning to a single "generic" user will confer more general alignment. Here, we embrace the heterogeneity of human preferences to consider a different challenge: how might a machine help people with diverse views find agreement? We fine-tune a 70 billion parameter LLM to generate statements that maximize the expected approval for a group of people with potentially diverse opinions. Human participants provide written opinions on thousands of questions touching on moral and political issues (e.g., "should we raise taxes on the rich?"), and rate the LLM's generated candidate consensus statements for agreement and quality. A reward model is then trained to predict individual preferences, enabling it to quantify and rank consensus statements in terms of their appeal to the overall group, defined according to different aggregation (social welfare) functions. The model produces consensus statements that are preferred by human users over those from prompted LLMs (>70%) and significantly outperforms a tight fine-tuned baseline that lacks the final ranking step. Further, our best model's consensus statements are preferred over the best human-generated opinions (>65%). We find that when we silently constructed consensus statements from only a subset of group members, those who were excluded were more likely to dissent, revealing the sensitivity of the consensus to individual contributions. These results highlight the potential to use LLMs to help groups of humans align their values with one another.
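As a rough illustration of the final ranking step, the sketch below scores each candidate statement with a per-user reward model and aggregates the scores under a chosen social welfare function. All names are hypothetical stand-ins; the paper's actual reward model and welfare functions are not reproduced here.

```python
import numpy as np

def rank_by_welfare(candidates, users, reward_fn, welfare="utilitarian"):
    """Rank candidate consensus statements by aggregate predicted approval.

    reward_fn(user, statement) -> float stands in for the trained per-user
    reward model; `welfare` selects the social welfare (aggregation) function.
    """
    scores = []
    for statement in candidates:
        rewards = np.array([reward_fn(u, statement) for u in users])
        if welfare == "utilitarian":     # maximize mean predicted approval
            scores.append(rewards.mean())
        elif welfare == "rawlsian":      # maximize approval of the worst-off member
            scores.append(rewards.min())
        else:
            raise ValueError(f"unknown welfare function: {welfare}")
    order = np.argsort(scores)[::-1]     # highest aggregate welfare first
    return [candidates[i] for i in order]

# Toy stand-in reward model: each user prefers statements near a target length.
toy_reward = lambda user, s: -abs(len(s) - user["preferred_len"])
users = [{"preferred_len": 20}, {"preferred_len": 40}]
print(rank_by_welfare(["short statement", "a rather longer candidate statement"], users, toy_reward))
```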
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably often involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
In the context of supervised machine learning, a learning curve describes how a model's performance on unseen data relates to the number of samples used to train it. In this paper, we introduce a dataset of plant images containing representatives of crops and weeds common to the Manitoba prairies at different growth stages. We determine the learning curves for a classification task on this data using a ResNet architecture. Our results agree with previous studies and add to the evidence that learning curves are governed by power-law relationships across large scales, applications, and models. We further investigate how label noise and reducing the number of trainable parameters affect the learning curves on this dataset. Both effects cause the model to require disproportionately larger training sets to achieve the same classification performance it would reach without them.
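For concreteness, a power-law learning curve of the form error(n) = a * n^(-b) + c can be fitted as in the sketch below; the sample sizes and error values are purely illustrative, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical power-law learning curve: error(n) = a * n**(-b) + c,
# where n is the training-set size and c is the irreducible error.
def power_law(n, a, b, c):
    return a * np.power(n, -b) + c

n_samples = np.array([100, 300, 1000, 3000, 10000], dtype=float)
test_error = np.array([0.42, 0.31, 0.22, 0.16, 0.12])  # illustrative values only

(a, b, c), _ = curve_fit(power_law, n_samples, test_error, p0=(1.0, 0.5, 0.05))
print(f"fitted power-law exponent b = {b:.2f}")
```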
Recently, the EAGL-I system was developed to rapidly create massive labeled datasets of plants, intended to be commonly used by farmers and researchers to create AI-driven solutions in agriculture. As a result, a publicly available plant-recognition dataset composed of 40,000 images of 8 plant species, with images of varying sizes, was created with the system to demonstrate its capabilities. This paper proposes a novel method, called the Variably Overlapping Time-Coherent Sliding Window (VOTCSW), that transforms a dataset composed of images of varying sizes into a 3D representation with a fixed size suitable for convolutional neural networks, and demonstrates that this representation is more informative than resizing the dataset's images to a given size. We theoretically formalize the use cases of the method as well as its inherent properties, and we prove that it has an oversampling and a regularization effect on the data. By combining the VOTCSW method with a 3D extension of a recently proposed machine learning model called the 1-Dimensional Polynomial Neural Network, we create a model that achieves state-of-the-art accuracy of 99.9% on the dataset created by the EAGL-I system, surpassing well-known architectures such as ResNet and Inception. In addition, we created a heuristic algorithm that can reduce any pre-trained N-dimensional polynomial neural network and compress it without changing its performance, making the model faster and lighter. Furthermore, we determined that the currently available dataset could not be used for machine learning in its present form, owing to a significant class imbalance between the training set and the test set. Hence, we created a specific preprocessing and model-development framework that enabled us to improve the accuracy from 49.23% to 99.9%.
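A 1D analogue can illustrate the variable-overlap idea: the stride, and hence the overlap, is chosen per input so that inputs of any length yield the same number of fixed-size windows. This is only a sketch of the principle; the actual VOTCSW method operates on 2D images, and all names below are hypothetical.

```python
import numpy as np

def votcsw_like(signal, window, n_windows):
    """Slide a fixed-size window over an input of arbitrary length, choosing
    the stride (hence the overlap) so the output always has exactly
    n_windows slices: a fixed-size, stackable 3D-style representation."""
    L = len(signal)
    assert L >= window and n_windows >= 2
    stride = (L - window) / (n_windows - 1)  # fractional stride => variable overlap
    starts = [round(i * stride) for i in range(n_windows)]
    return np.stack([signal[s:s + window] for s in starts])  # shape (n_windows, window)

x = np.arange(100.0)
print(votcsw_like(x, window=32, n_windows=8).shape)  # (8, 32) regardless of len(x)
```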
Cardiac imaging known as echocardiography is a non-invasive tool that generates data, including images and videos, which cardiologists use to diagnose cardiac abnormalities, in particular myocardial infarction (MI). Echocardiography machines can deliver large amounts of data that must be analyzed quickly by cardiologists to help them make a diagnosis and treat cardiac conditions. However, the quality of the acquired data depends on the acquisition conditions and on the patient's responsiveness to setup instructions. These constraints are particularly challenging for doctors when a patient is facing MI and their life is at stake. In this paper, we propose an innovative, real-time, end-to-end, fully automated model based on convolutional neural networks (CNNs) to detect MI from regional wall motion abnormalities (RWMA) of the left ventricle (LV) in echocardiography videos. Our model is implemented as a pipeline consisting of a 2D CNN that segments the data and a 3D CNN that detects MI. We trained the two CNNs on a dataset of 165 echocardiography videos, each acquired from a distinct patient. The 2D CNN achieved 97.18% accuracy on data segmentation, while the 3D CNN achieved 90.9% accuracy, 100% precision, and 95% recall. Our results demonstrate that creating a fully automated system for MI detection is feasible and advantageous.
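A highly simplified sketch of such a two-stage pipeline follows, with all model handles hypothetical: a 2D CNN segments each frame, and a 3D CNN classifies the stacked masks for MI.

```python
import numpy as np

def detect_mi(video, seg_model_2d, clf_model_3d, threshold=0.5):
    """Hypothetical two-stage pipeline: seg_model_2d(frame) returns a
    left-ventricle mask per frame, and clf_model_3d scores the stacked
    masks for regional wall motion abnormality (probability of MI)."""
    masks = np.stack([seg_model_2d(frame) for frame in video])   # (T, H, W)
    prob_mi = clf_model_3d(masks[np.newaxis, ..., np.newaxis])   # add batch & channel dims
    return prob_mi >= threshold
```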
In addition to being extremely non-linear, modern problems require millions, if not billions, of parameters to solve, or at least to obtain a good approximation of the solution, and neural networks are known to assimilate that complexity by deepening and widening their topology in order to increase the level of non-linearity needed for a better approximation. However, compact topologies are always preferred to deeper ones, since they offer the advantage of using fewer computational units and fewer parameters. This compactness comes at the price of reduced non-linearity and, thus, of a limited solution search space. We propose the 1-Dimensional Polynomial Neural Network (1DPNN) model, which uses automatic polynomial kernel estimation for 1-dimensional convolutional neural networks (1DCNNs) and introduces a high degree of non-linearity from the very first layer, which can compensate for the need for deep and/or wide topologies. We show that this non-linearity enables the model to yield better results, with lower overall computational and spatial complexity, than a regular 1DCNN on various classification and regression problems related to audio signals, even though it introduces more computational and spatial complexity at the neuron level. Experiments were conducted on three publicly available datasets and demonstrate that, on the problems tackled, the proposed model can extract more relevant information from the data than a 1DCNN, in less time and with less memory.
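To make the in-layer non-linearity concrete, here is a minimal PyTorch sketch of a polynomial 1D convolution that sums convolutions of successive element-wise powers of the input; the layer name and exact formulation are illustrative, not the paper's 1DPNN definition.

```python
import torch
import torch.nn as nn

class PolyConv1d(nn.Module):
    """Sketch of a polynomial 1D convolution: y = sum_{d=1}^{D} W_d * x^d + b,
    injecting non-linearity inside the layer itself rather than relying on
    depth. (Illustrative only; the 1DPNN formulation may differ.)"""
    def __init__(self, in_ch, out_ch, kernel_size, degree=3):
        super().__init__()
        self.convs = nn.ModuleList(
            # a single bias term, carried by the degree-1 convolution
            nn.Conv1d(in_ch, out_ch, kernel_size, padding=kernel_size // 2, bias=(d == 0))
            for d in range(degree)
        )
    def forward(self, x):
        return sum(conv(x ** (d + 1)) for d, conv in enumerate(self.convs))

layer = PolyConv1d(in_ch=1, out_ch=16, kernel_size=9, degree=3)
print(layer(torch.randn(4, 1, 128)).shape)  # torch.Size([4, 16, 128])
```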
Remote sensing imagery provides comprehensive views of the Earth, where different sensors collect complementary data at different spatial scales. Large, pretrained models are commonly finetuned with imagery that is heavily augmented to mimic different conditions and scales, with the resulting models used for various tasks with imagery from a range of spatial scales. Such models overlook scale-specific information in the data. In this paper, we present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales throughout the pretraining process. Scale-MAE pretrains a network by masking an input image at a known input scale, where the area of the Earth covered by the image determines the scale of the ViT positional encoding, not the image resolution. Scale-MAE encodes the masked image with a standard ViT backbone, and then decodes the masked image through a bandpass filter to reconstruct low/high frequency images at lower/higher scales. We find that tasking the network with reconstructing both low/high frequency images leads to robust multiscale representations for remote sensing imagery. Scale-MAE achieves an average of a $5.0\%$ non-parametric kNN classification improvement across eight remote sensing datasets compared to current state-of-the-art and obtains a $0.9$ mIoU to $3.8$ mIoU improvement on the SpaceNet building segmentation transfer task for a range of evaluation scales.
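As a rough sketch of the scale-aware encoding idea, the snippet below scales a standard sinusoidal positional encoding by ground sample distance (GSD), so that images covering the same ground extent at different resolutions receive consistent encodings. The function and its formula are assumptions for illustration, not Scale-MAE's exact implementation.

```python
import numpy as np

def gsd_positional_encoding(n_pos, dim, gsd, ref_gsd=1.0):
    """Scale positions by the image's ground sample distance (meters/pixel)
    relative to a reference before applying a sinusoidal encoding, so the
    encoding reflects ground distance rather than pixel index."""
    pos = np.arange(n_pos)[:, None] * (gsd / ref_gsd)
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10000 ** (2 * i / dim))
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)  # (n_pos, dim)

pe = gsd_positional_encoding(n_pos=16, dim=8, gsd=10.0)  # e.g. 10 m/pixel imagery
print(pe.shape)  # (16, 8)
```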
Managing novelty in perception-based human activity recognition (HAR) is critical in realistic settings to improve task performance over time and ensure solution generalization outside of prior seen samples. Novelty manifests in HAR as unseen samples, activities, objects, environments, and sensor changes, among other ways. Novelty may be task-relevant, such as a new class or new features, or task-irrelevant, resulting in nuisance novelty, such as never before seen noise, blur, or distorted video recordings. To perform HAR optimally, algorithmic solutions must be tolerant to nuisance novelty and learn over time in the face of novelty. This paper 1) formalizes the definition of novelty in HAR, building upon the prior definition of novelty in classification tasks, 2) proposes an incremental open world learning (OWL) protocol and applies it to the Kinetics datasets to generate a new benchmark, KOWL-718, 3) analyzes the performance of current state-of-the-art HAR models when novelty is introduced over time, and 4) provides a containerized and packaged pipeline for reproducing the OWL protocol and for adapting it to any future updates to Kinetics. The experimental analysis includes an ablation study of how the different models perform under various conditions as annotated by Kinetics-AVA. The protocol, as an algorithm for reproducing experiments using the KOWL-718 benchmark, will be publicly released with code and containers at https://github.com/prijatelj/human-activity-recognition-in-an-open-world. The code may be used to analyze different annotations and subsets of the Kinetics datasets in an incremental open world fashion, as well as be extended as further updates to Kinetics are released.
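A minimal sketch of what an incremental OWL evaluation loop can look like, assuming a hypothetical model API with predict/fit methods; the released pipeline defines the actual protocol.

```python
def run_owl_protocol(model, data_steps, known_classes):
    """Sketch of an incremental open-world learning (OWL) loop: at each step
    the model is first evaluated on data that may contain novel activities,
    novelty is flagged, and only then does the model learn from the step's
    labels. Hypothetical model API (predict/fit); known_classes is a set."""
    results = []
    for step, (videos, labels) in enumerate(data_steps):
        preds = [model.predict(v) for v in videos]    # evaluate before adapting
        novel = sorted(set(labels) - known_classes)   # task-relevant novelty
        results.append({"step": step, "preds": preds, "novel_classes": novel})
        model.fit(videos, labels)                     # incremental update
        known_classes |= set(novel)
    return results
```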
Frequent, cost-free satellite imagery is in growing demand in the research community. Satellite constellations such as Landsat 8 and Sentinel-2 provide massive amounts of valuable data daily. However, the discrepancy between these satellites' sensor characteristics makes it impractical to train a segmentation model on data from one and apply it to the other, which is why domain adaptation techniques have recently become an active research area in remote sensing. In this paper, a domain adaptation experiment based on style transfer is conducted using the HRSemI2I model to narrow the sensor discrepancy between Landsat 8 and Sentinel-2. This paper's main contribution is an analysis of the effectiveness of that approach, comparing segmentation results on domain-adapted images with those on unadapted images. The HRSemI2I model, adjusted to work with 6-band imagery, shows significant intersection-over-union performance improvement for both mean and per-class metrics. A second contribution is providing different schemes of generalization between two label schemes, NALCMS 2015 and CORINE: the first scheme standardizes labels through higher-level land cover classes, and the second harmonizes them through validation in the field.
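The core comparison can be sketched as follows, with all helper names (seg_model, style_transfer, iou) hypothetical: segment raw target-domain images versus images style-transferred toward the source domain, then compare mean IoU.

```python
from statistics import mean

def evaluate_adaptation(seg_model, style_transfer, target_images, target_masks, iou):
    """Compare segmentation quality without and with style-transfer-based
    domain adaptation. iou(pred, mask) -> float is a per-image IoU metric."""
    raw_iou = mean(iou(seg_model(img), m) for img, m in zip(target_images, target_masks))
    adapted = [style_transfer(img) for img in target_images]
    ada_iou = mean(iou(seg_model(img), m) for img, m in zip(adapted, target_masks))
    return raw_iou, ada_iou
```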
This paper presents a novel approach to the acquisition of language models from corpora. The framework builds on Cobweb, an early system for constructing taxonomic hierarchies of probabilistic concepts that used a tabular, attribute-value encoding of training cases and concepts, making it unsuitable for sequential input like language. In response, we explore three new extensions to Cobweb -- the Word, Leaf, and Path variants. These systems encode each training case as an anchor word and surrounding context words, and they store probabilistic descriptions of concepts as distributions over anchor and context information. As in the original Cobweb, a performance element sorts a new instance downward through the hierarchy and uses the final node to predict missing features. Learning is interleaved with performance, updating concept probabilities and hierarchy structure as classification occurs. Thus, the new approaches process training cases in an incremental, online manner that is very different from most methods for statistical language learning. We examine how well the three variants place synonyms together and keep homonyms apart, their ability to recall synonyms as a function of training set size, and their training efficiency. Finally, we discuss related work on incremental learning and directions for further research.
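As an illustration of the anchor-plus-context encoding, the sketch below builds one training case from a token sequence; the window size and the uniform weighting of context words are assumptions, not necessarily what the Cobweb variants use.

```python
from collections import Counter

def encode_instance(tokens, index, window=2):
    """Build one training case as an anchor word plus counts of the words
    in a small surrounding window."""
    anchor = tokens[index]
    left = tokens[max(0, index - window):index]
    right = tokens[index + 1:index + 1 + window]
    return {"anchor": anchor, "context": Counter(left + right)}

print(encode_instance("the cat sat on the mat".split(), index=2))
# {'anchor': 'sat', 'context': Counter({'the': 2, 'cat': 1, 'on': 1})}
```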